10 research outputs found

    LCF-style Platform based on Multiway Decision Graphs

    The combination of the state exploration approach (mainly model checking) and the deductive reasoning approach (theorem proving) promises to overcome the limitations and to enhance the capabilities of each. In this paper, we are interested in defining a platform for Multiway Decision Graphs (MDGs) in an LCF-style theorem prover. We define a platform that represents the MDG operations, conjunction, disjunction, relational product and prune-by-subsumption, as a set of inference rules. Based on this platform, reachability analysis is implemented as a conversion that uses the MDG theory within the HOL theorem prover. Finally, we present experimental results showing the performance of the MDG operations of our platform.
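    As a rough illustration of how these operators fit together, the sketch below implements the generic frontier-set reachability loop the abstract describes. Plain Python sets of state tuples stand in for MDGs, and relational_product and prune_by_subsumption are simplified set-based stand-ins, not the graph algorithms of the paper.

```python
# Sketch of the frontier-set reachability loop supported by the MDG operations.
# Sets of state tuples stand in for MDGs; relational_product and
# prune_by_subsumption are simplified set-based stand-ins.

def relational_product(frontier, transitions):
    """Image computation: states reachable in one step from `frontier`."""
    return {t for (s, t) in transitions if s in frontier}

def prune_by_subsumption(frontier, reached):
    """Keep only the frontier states not already covered by `reached`."""
    return frontier - reached

def reachable(initial, transitions):
    """Least-fixpoint reachability analysis."""
    reached = set(initial)
    frontier = set(initial)
    while frontier:
        image = relational_product(frontier, transitions)  # one-step image
        frontier = prune_by_subsumption(image, reached)    # new states only
        reached |= frontier                                # "disjunction" step
    return reached

# Tiny example: a three-state machine 0 -> 1 -> 2 -> 2.
if __name__ == "__main__":
    print(reachable({0}, {(0, 1), (1, 2), (2, 2)}))  # prints {0, 1, 2}
```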

    The verification of MDG algorithms in the HOL theorem prover

    Formal verification of digital systems is achieved, today, using one of two main approaches: state exploration (mainly model checking and equivalence checking) or deductive reasoning (theorem proving). The combination of the two approaches promises to overcome the limitations and to enhance the capabilities of each, and our research is motivated by this goal. In this thesis, we provide the entire infrastructure (data structures and algorithms) needed to define high-level state exploration in the HOL theorem prover, named the MDG-HOL platform. While related work has tackled the same problem by representing primitive Binary Decision Diagram (BDD) operations as inference rules added to the core of the theorem prover, we base our approach on Multiway Decision Graphs (MDGs). MDGs generalize ROBDDs to represent and manipulate a subset of first-order logic formulae: a data value is represented by a single variable of an abstract sort, and operations on data are represented in terms of uninterpreted functions. Using MDGs instead of BDDs raises the abstraction level of what can be verified by state exploration within a theorem prover.

    The MDG embedding is based on the logical formulation of an MDG as a Directed Formula (DF). The DF syntax is defined using HOL built-in data types, and we formalize the basic MDG operations on this syntax within HOL following a deep embedding approach, which ensures the consistency of our embedding. We then derive a correctness proof for each basic MDG operator. Based on this platform, MDG reachability analysis is defined in HOL as a conversion that uses the MDG theory within HOL. We demonstrate the effectiveness of our platform on four case studies; the results show that this verification framework offers a considerable gain in automation without sacrificing CPU time or memory usage compared to automatic model-checking tools.

    Finally, we propose a reduction technique to improve MDG model checking based on the MDG-HOL platform. The idea is to prune the transition relation of the circuits using pre-proved theorems and lemmas from the specification given at the system level. We also use the consistency of the specifications to verify that the reduced model is faithful to the original one. We provide two case studies: the SAT-MDG reduction of an Island Tunnel Controller and the MDG-HOL assume-guarantee reduction of the Look-Aside Interface. Compared to commercial model checking, our approach offers a considerable gain in the correctness of heuristics and reduction techniques, at the cost of a small penalty in CPU time and memory usage.
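    To make the Directed Formula idea more concrete, the following sketch models a DF-like syntax with Python dataclasses: equations whose right-hand sides are concrete constants, abstract-sort variables, or uninterpreted function applications, grouped into a disjunction of conjunctions. The type and sort names are illustrative assumptions, not the HOL datatypes defined in the thesis.

```python
# Loose model of a Directed Formula (DF) syntax: a disjunction of
# conjunctions of equations, where each right-hand side is a concrete-sort
# constant, an abstract-sort variable, or an uninterpreted function applied
# to abstract terms.  Names and sorts are illustrative only.
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Var:            # variable of a concrete or abstract sort
    name: str
    sort: str

@dataclass(frozen=True)
class Const:          # constant of a concrete (enumerated) sort
    value: str
    sort: str

@dataclass(frozen=True)
class App:            # uninterpreted function applied to terms
    fn: str
    args: Tuple["Term", ...]

Term = Union[Var, Const, App]

@dataclass(frozen=True)
class Eq:             # equation "lhs = rhs" labelling an MDG edge
    lhs: Var
    rhs: Term

# A DF as a disjunction (outer tuple) of conjunctions (inner tuples) of
# equations, e.g. a fragment of a transition relation:
df_example = (
    (Eq(Var("state", "wordn"), App("inc", (Var("state_prev", "wordn"),))),
     Eq(Var("ready", "bool"), Const("1", "bool"))),
)
```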

    Efficient implementation of parallel SAT solver based stochastic local search

    Formal reasoning about synthetic biology using higher-order-logic theorem proving

    Performance evaluation of the SM4 cipher based on field‐programmable gate array implementation

    Information security is essential to protect sensitive data exchanged by resource-constrained devices (RCDs), which are widely used in the Internet of Things (IoT). Special ciphers must be implemented in these RCDs because they face many limitations and constraints, such as low power/energy budgets and limited hardware resources. The SM4 cipher is one of the common block ciphers; it can be implemented easily and offers a high level of security. The objective of this study is to determine the optimum field-programmable gate array (FPGA) design for SM4, so that the FPGA can be reconfigured with the optimum design during operation. Various FPGA design options for the SM4 cipher are examined, and the performance metrics power, energy, area, and speed are modeled. Scalar and pipelined designs with one or multiple hardware rounds are considered without altering the cipher algorithm. The results show that the best scalar implementation uses 7% fewer resources than the pipelined implementations, while the pipelined implementations are about 10 times faster and dissipate about 40% of the energy of the scalar implementation. The pipelined implementations with eight or 16 rounds are optimum for continuous streams of data, and the two-round design is the optimum design across ciphers.
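    As a rough guide to how the reported metrics relate, the sketch below gives a first-order throughput/energy model for iterative versus pipelined SM4 designs. Only the 32-round structure and 128-bit block size of SM4 are taken as given; every clock frequency and power figure is a placeholder assumption, not a measurement from the study.

```python
# First-order throughput/energy model for iterative (scalar) vs pipelined SM4
# FPGA designs.  Only the 32-round / 128-bit-block structure of SM4 is taken
# as given; all MHz and mW values below are placeholders.
from math import ceil

SM4_ROUNDS = 32
BLOCK_BITS = 128

def metrics(cycles_per_block, f_clk_mhz, power_mw):
    """Throughput (Mbit/s) and energy (nJ/bit) from a simple cycle count."""
    throughput_mbps = f_clk_mhz * BLOCK_BITS / cycles_per_block
    energy_nj_per_bit = power_mw / throughput_mbps   # mW / (Mbit/s) = nJ/bit
    return throughput_mbps, energy_nj_per_bit

# Scalar (iterative) design with r hardware rounds: each block occupies the
# datapath for ceil(32 / r) cycles before the next block can start.
def scalar(hw_rounds, f_clk_mhz, power_mw):
    return metrics(ceil(SM4_ROUNDS / hw_rounds), f_clk_mhz, power_mw)

# Pipelined design with s single-round stages kept full: one block completes
# every ceil(32 / s) cycles in steady state.
def pipelined(stages, f_clk_mhz, power_mw):
    return metrics(ceil(SM4_ROUNDS / stages), f_clk_mhz, power_mw)

# Placeholder comparison (frequencies and powers are assumptions, not results):
print("scalar, 1 round   :", scalar(1, f_clk_mhz=200, power_mw=120))
print("pipelined, 8 deep :", pipelined(8, f_clk_mhz=250, power_mw=300))
print("pipelined, 32 deep:", pipelined(32, f_clk_mhz=250, power_mw=700))
```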

    Error metrics determination in functionally approximated circuits using SAT solvers.

    Approximate computing is an emerging design paradigm that trades output accuracy for computation effort by exploiting the intrinsic error resiliency of some applications. Computing error metrics is of paramount importance in approximate circuits in order to measure the degree of approximation. Most existing techniques for evaluating error metrics rely on simulation, which may not be effective for large, complex designs because of an immense increase in simulation runtime and a decrease in accuracy. To address these deficiencies, we present a novel methodology that employs SAT (Boolean satisfiability) solvers for fast and accurate determination of error metrics, specifically the average-case error and the maximum error rate, in functionally approximated circuits. The proposed approach identifies the set of all error-producing assignments to gauge the quality of approximate circuits for real-life applications. It also provides a test-generation method to guide design choices and serves as an important aid in debugging approximate circuits to discover and locate errors. The effectiveness of the approach is demonstrated by evaluating the error metrics of several approximate benchmark adders of different sizes. Experimental results on benchmark circuits show that the proposed SAT-based methodology accurately determines the maximum error rate and the average-case error in one pass within acceptable CPU execution time, and additionally provides a log of error-generating input assignments.
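    The miter-plus-enumeration idea can be illustrated on a toy example. The sketch below uses the python-sat (pysat) package to enumerate every error-producing input assignment for a single-gate "circuit" pair (exact XOR versus approximate OR) and derive the error rate; it is only a minimal illustration of the SAT formulation, not the paper's methodology or its benchmark adders.

```python
# Minimal SAT-based error-metric sketch using python-sat (pysat).
# Exact output y = a XOR b, approximate output z = a OR b.  A miter
# constraint d = y XOR z is asserted true, and all satisfying input
# assignments are enumerated with blocking clauses; the error rate is the
# fraction of input patterns that produce a mismatch.
from pysat.solvers import Minisat22

A, B, Y, Z, D = 1, 2, 3, 4, 5   # CNF variable ids

clauses = [
    # y <-> a XOR b  (exact circuit)
    [-A, -B, -Y], [A, B, -Y], [A, -B, Y], [-A, B, Y],
    # z <-> a OR b   (approximate circuit)
    [-A, Z], [-B, Z], [A, B, -Z],
    # d <-> y XOR z  (miter comparing the two outputs)
    [-Y, -Z, -D], [Y, Z, -D], [Y, -Z, D], [-Y, Z, D],
    # assert a mismatch
    [D],
]

error_inputs = []
with Minisat22(bootstrap_with=clauses) as solver:
    while solver.solve():
        model = solver.get_model()       # list of signed literals
        a, b = A in model, B in model
        error_inputs.append((int(a), int(b)))
        # block this input pattern and look for the next one
        solver.add_clause([-A if a else A, -B if b else B])

n_inputs = 2
print("error-producing inputs:", error_inputs)            # expected: [(1, 1)]
print("error rate:", len(error_inputs) / 2 ** n_inputs)   # expected: 0.25
```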

    An efficient multiple sclerosis segmentation and detection system using neural networks

    In this work, an efficient multiple sclerosis (MS) segmentation technique is proposed to simplify the pre-processing steps and reduce processing time using heterogeneous single-channel magnetic resonance imaging (MRI). Spatial-filtering image mapping, a histogram reference image, and histogram matching are applied to derive a local threshold per image from a global threshold algorithm. Feature extraction is performed using mathematical and morphological operations, and a multilayer feed-forward neural network (MLFFNN) is used to identify MS tissue. Fluid-attenuated inversion recovery (FLAIR) series are used to build a faster system while maintaining reliability and accuracy. A sagittal (SAG) FLAIR-based system is proposed for the first time in MS detection systems; it reduces the number of images used and decreases the processing time by nearly one-third. Our detection system achieved a recognition rate of up to 98.5%, and a relatively high Dice coefficient (DC) value (0.71 +/- 0.18) was observed when testing new images.
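    The named processing steps can be sketched with standard Python imaging and machine-learning libraries (assuming scikit-image and scikit-learn, which are not named in the abstract). The synthetic arrays below stand in for FLAIR slices, and none of the thresholds, features, labels, or network settings are those of the paper.

```python
# Sketch of the pipeline steps: histogram matching against a reference image,
# a global (Otsu) threshold applied per image, region-based feature
# extraction, and a feed-forward neural network classifier.  All data and
# parameters are illustrative placeholders.
import numpy as np
from skimage.exposure import match_histograms
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Placeholder "reference" and "slice" images: dim background plus bright blobs.
reference = rng.random((128, 128)) * 0.2
reference[10:40, 10:40] = 0.9
slice_img = rng.random((128, 128)) * 0.2
for i, (r, c) in enumerate([(20, 20), (40, 90), (70, 50), (100, 100), (90, 15)]):
    slice_img[r:r + 4 + i, c:c + 4 + i] = 0.9    # fake hyperintense "lesions"

# 1) Histogram matching so every slice follows the reference intensity profile.
matched = match_histograms(slice_img, reference)

# 2) A global (Otsu) threshold computed per matched image acts as its
#    per-image threshold.
mask = matched > threshold_otsu(matched)

# 3) Region-based features, one vector per candidate region.
features, labels = [], []
for i, region in enumerate(regionprops(label(mask))):
    features.append([region.area, region.eccentricity, region.extent])
    labels.append(i % 2)            # placeholder lesion / non-lesion labels

# 4) Multilayer feed-forward network classifying candidate regions.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(np.array(features), np.array(labels))
print("predictions on placeholder regions:", clf.predict(np.array(features)))
```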